Introspective Reasoning in a Case-Based Planner
Authors
Abstract
Many current AI systems assume that the reasoning mechanisms used to manipulate their knowledge may be fixed ahead of time by the designer. This assumption may break down in complex domains. The focus of this research is developing a model of introspective reasoning and learning to enable a system to improve its own reasoning as well as its domain knowledge. Our model is based on the proposal of (Birnbaum et al. 1991) to use a model of the ideal behavior of a case-based system to judge system performance and to refine its reasoning mechanisms; it also draws on the research of (Ram & Cox 1994) on introspective failure-driven learning. This work examines introspection guided by expectation failures about reasoning performance. We are developing a vocabulary of failures for the case-based system, an introspective reasoner which uses a hierarchical model of system behavior, and a method of reusing CBR for parts of the case-based planner itself. The system we are developing combines a model-based introspective reasoner with a case-based planning system. The planner generates high-level plans for navigating city streets, and is similar in structure to the planner CHEF (Hammond 1989). However, we implement components of the planner using the case-based reasoning mechanisms of the planner as a whole. Our primary interest in this approach is the advantage it offers for developing the model for introspective reasoning: we can reuse expectations that apply to the planner as a whole for its case-based parts. During the planning process, the introspective reasoner compares the planner's reasoning to its assertions about ideal behavior. When a failure is detected, for instance if the system judges that the retrieved case is not the "best" case in memory, the introspective reasoner considers related assertions to pinpoint the source of the failure and to suggest a solution. In this case our system creates a new index to distinguish the true best case from the bad retrieved case. Determining what information to include in the model and how to structure it are central issues. Birnbaum's model is a set of high-level assertions applicable to many case-based planners (Birnbaum et al. 1991). While such assertions cover a wide range of failures, they are too general to easily specify causes or repairs for failures. We propose as an alternative a hierarchical model including highly abstract assertions as well as assertions specific to this planner. Low-level assertions help to notice failures and …
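To make the detect, explain, and repair cycle described above concrete, the sketch below gives a minimal, purely illustrative rendering (in Python) of an introspective monitor around case retrieval. It is not the authors' implementation; the names Case, Retriever, ideal_best, and introspective_check, along with the simple feature-overlap similarity measure, are assumptions made for the example. The monitor checks the assertion that the retrieved case is the most similar case in memory; when that expectation fails, it looks for a feature that separates the true best case from the retrieved one and adds that feature to the index vocabulary.

from dataclasses import dataclass, field

@dataclass
class Case:
    name: str
    features: dict                      # full description of the planning problem
    plan: list = field(default_factory=list)

def similarity(a, b, keys):
    # Count feature values shared by descriptions a and b over the given keys.
    return sum(1 for k in keys if k in a and k in b and a[k] == b[k])

class Retriever:
    def __init__(self, cases, index_features):
        self.cases = cases
        self.index_features = set(index_features)   # features consulted at retrieval time

    def retrieve(self, problem):
        # Retrieval only looks at the indexed features: cheap, but fallible.
        return max(self.cases,
                   key=lambda c: similarity(problem, c.features, self.index_features))

def ideal_best(cases, problem):
    # Assertion about ideal behavior: the retrieved case should be the one most
    # similar to the problem when all known features are taken into account.
    all_keys = set(problem)
    for c in cases:
        all_keys |= set(c.features)
    return max(cases, key=lambda c: similarity(problem, c.features, all_keys))

def introspective_check(retriever, problem):
    retrieved = retriever.retrieve(problem)
    best = ideal_best(retriever.cases, problem)
    if retrieved is best:
        return "retrieval met the ideal-behavior assertion"
    # Expectation failure: explain it by finding a feature that separates the
    # true best case from the retrieved one, then repair the index vocabulary.
    for k, v in best.features.items():
        if problem.get(k) == v and retrieved.features.get(k) != v:
            retriever.index_features.add(k)
            return "failure repaired: added index feature '%s'" % k
    return "failure detected, but no distinguishing feature was found"

The planner described above uses a hierarchical model of assertions rather than the single flat check shown here, and plan adaptation would follow a successful retrieval; the sketch is intended only to show the control flow of monitoring retrieval, explaining an expectation failure, and repairing the indexing.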
Related papers
Reflective Introspective Reasoning Through CBR
In recent years, “introspective reasoning” systems have been developed to model the ability to reason about one’s own reasoning performance. This research examines “reflective” introspective reasoning: introspecting about the introspective reasoning process, itself. We introduce a reflective introspective reasoning system that uses case-based reasoning (CBR) as its central reasoning method. We ...
Combining Case-based Planning and Introspective Reasoning
There is much current interest in introspective reasoning, reasoning about reasoning processes themselves. One application of introspective reasoning is to detect flaws in a system's own reasoning, and to refine its reasoning methods to correct those flaws. We propose a framework for performing such introspective refinement, and describe its implementation in a system which combines introspective lea...
A New Model of Reflective Introspective Learning
Systems which introspect about their own processes can improve their reasoning behavior in response to experience using "introspective learning" techniques. Many systems which perform introspective learning analyze and change only an underlying domain task's reasoning processes. They do not possess the ability to reflectively introspect about the introspective task itself. We present a model o...
Learning to Refine Indexing by Introspective Reasoning
A significant problem for case-based reasoning (CBR) systems is determining the features to use in judging case similarity for retrieval. We describe research that addresses the feature selection problem by using introspective reasoning to learn new features for indexing. Our method augments the CBR system with an introspective reasoning component which monitors system performance to detect poo...
Using Introspective Reasoning to Refine Indexing
Introspective reasoning about a system's own reasoning processes can form the basis for learning to refine those reasoning processes. The ROBBIE system uses introspective reasoning to monitor the retrieval process of a case-based planner to detect retrieval of inappropriate cases. When retrieval problems are detected, the source of the problems is explained and the explanations are used to det...
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
عنوان ژورنال:
دوره شماره
صفحات -
Publication date: 1994